4 research outputs found

    Event-driven implementation of deep spiking convolutional neural networks for supervised classification using the SpiNNaker neuromorphic platform

    Neural networks have enabled great advances in recent times, mainly due to improved parallel computing capabilities in accordance with Moore's Law, which reduced the time needed for parameter learning in complex, multi-layered neural architectures. However, with silicon technology reaching its physical limits, new computing paradigms are needed to increase the power efficiency of learning algorithms, especially for dealing with deep spatio-temporal knowledge in embedded applications. With the goal of mimicking the brain's power efficiency, new hardware architectures such as the SpiNNaker board have been built. Furthermore, recent works have shown that networks using spiking neurons as learning units can match classical neural networks on supervised tasks. In this paper, we show that the implementation of state-of-the-art models on both the MNIST and the event-based N-MNIST digit recognition datasets is possible on neuromorphic hardware. We use two approaches: directly converting a classical neural network to its spiking version, and training a spiking network from scratch. For both cases, software simulations and implementations on a SpiNNaker 103 machine were performed. Numerical results approaching the state of the art on digit recognition are presented, and a new method to decrease the spike rate needed for the task is proposed, which allows a significant reduction of the spikes (up to 34 times for a fully connected architecture) while preserving the accuracy of the system. With this method, we provide new insights into the capabilities offered by networks of spiking neurons to efficiently encode spatio-temporal information.
    Funding: Consejo Nacional de Ciencia y Tecnología (México) FC2016-1961; European Union's Horizon 2020 No 824164 HERMES; Ministerio de Ciencia, Innovación y Universidades TEC2015-63884-C2-1-
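    As a rough illustration of the first approach (converting a trained classical network to a spiking one), the sketch below replaces a ReLU unit with an integrate-and-fire neuron whose firing rate over a simulation window approximates the activation. This is a generic rate-coding sketch, not the paper's implementation; all names and parameters are illustrative.

        # Minimal sketch of rate-based ANN-to-SNN conversion (illustrative,
        # not the paper's code): an integrate-and-fire neuron fires at a
        # rate that approximates the ReLU activation of the original unit.
        import numpy as np

        def if_layer_rates(x, w, t_steps=100, v_thresh=1.0):
            """Approximate relu(x @ w) by the firing rates of IF neurons."""
            v = np.zeros(w.shape[1])        # membrane potentials
            spikes = np.zeros(w.shape[1])   # spike counts
            drive = x @ w                   # constant input current per step
            for _ in range(t_steps):
                v += drive                  # integrate
                fired = v >= v_thresh
                spikes += fired
                v[fired] -= v_thresh        # reset by subtraction
            return spikes / t_steps         # rate ~ clipped ReLU activation

        x = np.array([0.3, 0.2])
        w = np.array([[0.5, -1.0], [0.5, 1.0]])
        print(if_layer_rates(x, w))         # ~[0.25, 0.0] = np.maximum(x @ w, 0)

    In this scheme, longer simulation windows trade latency and spike count for precision, which is why methods that lower the required spike rate, like the one proposed above, matter for efficiency.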

    Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking

    The role of axonal synaptic delays in the efficacy and performance of artificial neural networks has been largely unexplored. In step-based analog-valued neural network models (ANNs), the concept is almost absent. In their spiking, neuroscience-inspired counterparts, there is hardly a systematic account of their effects on model performance in terms of accuracy and number of synaptic operations. This paper proposes a methodology for accounting for axonal delays in the training loop of deep Spiking Neural Networks (SNNs), intending to efficiently solve machine learning tasks on data with rich temporal dependencies. We then conduct an empirical study of the effects of axonal delays on model performance during inference for the Adding task, a benchmark for sequential regression, and for the Spiking Heidelberg Digits (SHD) dataset, commonly used for evaluating event-driven models. Quantitative results on SHD show that SNNs incorporating axonal delays instead of explicit recurrent synapses achieve state-of-the-art performance, with over 90% test accuracy, while needing fewer than half the trainable synapses. Additionally, we estimate the memory requirements, in terms of total parameters, and the energy consumption of accommodating such delay-trained models on a modern neuromorphic accelerator. These estimations are based on the number of synaptic operations and the reference GF-22nm FDX CMOS technology. As a result, we demonstrate that a reduced parameterization that incorporates axonal delays leads to approximately 90% energy and memory reduction in digital hardware implementations for similar performance on the aforementioned task.
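    To make the mechanism concrete, the sketch below shows one common way to simulate per-synapse axonal delays with a ring buffer: a spike sent at time t through a synapse with delay d is delivered to the post-synaptic neuron at time t + d. The paper learns the delays inside the training loop; here they are fixed, and the class and parameter names are illustrative.

        # Hedged sketch of per-synapse integer axonal delays (illustrative,
        # not the paper's implementation). Weighted spikes are queued in a
        # ring buffer and delivered after their delay in simulation steps.
        import numpy as np

        class DelayedSynapses:
            def __init__(self, w, delays):
                self.w = w                        # (n_pre, n_post) weights
                self.delays = delays              # (n_pre, n_post) int delays
                self.buf = np.zeros((int(delays.max()) + 1, w.shape[1]))
                self.t = 0

            def step(self, pre_spikes):
                """Queue this step's spikes, return input arriving now."""
                n_rows, n_post = self.buf.shape
                for i in np.flatnonzero(pre_spikes):
                    rows = (self.t + self.delays[i]) % n_rows
                    np.add.at(self.buf, (rows, np.arange(n_post)), self.w[i])
                row = self.t % n_rows
                current = self.buf[row].copy()    # delayed input due now
                self.buf[row] = 0.0               # free the slot for reuse
                self.t += 1
                return current

        rng = np.random.default_rng(0)
        syn = DelayedSynapses(w=rng.normal(size=(3, 4)),
                              delays=rng.integers(0, 5, size=(3, 4)))
        for _ in range(10):
            post_input = syn.step(pre_spikes=rng.integers(0, 2, size=3))

    Because a delayed synapse stores past activity in a buffer rather than in extra recurrent weights, it can stand in for explicit recurrence at a much lower parameter cost, which is the source of the memory and energy savings reported above.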

    Phoneme Recognition System Using Articulatory-Type Information

    This work is framed within the development of phoneme recognition systems and seeks to establish whether incorporating information related to the movement of the articulators helps to improve their performance. For this purpose, two systems are developed and compared, where the acoustic model is obtained by training hidden Markov models. The first system represents the voice signal by Mel Frequency Cepstral Coefficients; the second uses the same cepstral coefficients together with articulatory parameters. The experiments were conducted on the MOCHA-TIMIT database. The results show a significant increase in the system's performance when adding articulatory parameters compared to the system based only on Mel Frequency Cepstral Coefficients.
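    A minimal sketch of the feature-level combination described above: MFCCs concatenated frame by frame with articulatory channels, feeding one Gaussian HMM per phoneme. The library calls (librosa, hmmlearn) are standard, but the alignment step and all sizes are assumptions for illustration, not details taken from the paper.

        # Illustrative sketch: acoustic + articulatory features for an
        # HMM-based phoneme recognizer. Assumes the articulatory channels
        # are already resampled to the MFCC frame rate.
        import numpy as np
        import librosa
        from hmmlearn.hmm import GaussianHMM

        def combined_features(wav_path, artic):
            """artic: (n_frames, n_channels) articulatory parameters."""
            y, sr = librosa.load(wav_path, sr=16000)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (n_frames, 13)
            n = min(len(mfcc), len(artic))
            return np.hstack([mfcc[:n], artic[:n]])   # joint feature vector

        def train_phoneme_model(sequences, n_states=3):
            """Fit one HMM on all frame sequences of a given phoneme."""
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            model = GaussianHMM(n_components=n_states, covariance_type="diag")
            model.fit(X, lengths)
            return model  # classify a segment via the highest model.score()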

    Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks

    Liquid State Machines (LSMs) are computing reservoirs composed of recurrently connected Spiking Neural Networks. They have attracted research interest for their capacity to model biological structures and as promising pattern recognition tools suitable for implementation in neuromorphic processors, benefiting from the modest use of computing resources in their training process. However, it has been difficult to optimize LSMs for solving complex tasks such as event-based computer vision, and few implementations on large-scale neuromorphic processors have been attempted. In this work, we show that offline-trained LSMs implemented on the SpiNNaker neuromorphic processor are able to classify visual events, achieving state-of-the-art performance on the event-based N-MNIST dataset. The training of the readout layer is performed using a recent adaptation of backpropagation-through-time (BPTT) for SNNs, while the internal weights of the reservoir are kept static. Results show that mapping our LSM from a deep learning framework to SpiNNaker does not affect the performance of the classification task. Additionally, we show that weight quantization, which substantially reduces the memory footprint of the LSM, has only a small impact on its performance.
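    The structure described above can be sketched compactly: a fixed, randomly connected spiking reservoir whose spike counts feed a trainable readout. Purely for illustration, the readout below is a ridge regression on reservoir spike counts rather than the BPTT-trained layer used in the paper, and all sizes are arbitrary.

        # Illustrative LSM sketch: static random reservoir of leaky
        # integrate-and-fire neurons; only the linear readout is trained.
        import numpy as np

        rng = np.random.default_rng(0)
        N_IN, N_RES, T = 64, 200, 100
        W_in = rng.normal(0.0, 0.5, (N_IN, N_RES))    # static input weights
        W_res = rng.normal(0.0, 0.1, (N_RES, N_RES))  # static recurrent weights

        def reservoir_spike_counts(inp_spikes, v_thresh=1.0, leak=0.9):
            """inp_spikes: (T, N_IN) binary events -> (N_RES,) spike counts."""
            v = np.zeros(N_RES)
            prev = np.zeros(N_RES)          # last step's reservoir spikes
            counts = np.zeros(N_RES)
            for t in range(T):
                v = leak * v + inp_spikes[t] @ W_in + prev @ W_res
                fired = v >= v_thresh
                v[fired] = 0.0              # reset to rest on spike
                prev = fired.astype(float)
                counts += fired
            return counts

        def train_readout(features, labels, n_classes, lam=1e-2):
            """Ridge-regression readout on spike counts, one-hot targets."""
            F = np.asarray(features)        # (n_samples, N_RES)
            Y = np.eye(n_classes)[labels]
            W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)
            return W                        # predict: argmax(counts @ W, axis=-1)

    Keeping the reservoir static is what makes LSM training cheap: only the readout weights are optimized, while the recurrent dynamics act as a fixed spatio-temporal feature map.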